Stabilization of the Stokes problem with the residual minimization method

Consider the Stokes problem, which requires additional stabilization. Let us write it as a single variational equation
\( B(({\bf u},p),({\bf v},q)) = L({\bf v }) \)

where
\( B(({\bf u},p),({\bf v},q)) = \left( \begin{matrix} a({\bf u}, {\bf v}) + b(p, {\bf v}) \\ b(q, {\bf u} ) \\ \end{matrix} \right) \)
\( L({\bf v }) = \left( \begin{matrix} f({\bf v}) \\ 0 \\ \end{matrix} \right) \)
and
\( a({\bf u}, {\bf v }) =\int_{\Omega} \frac{\partial u_1}{\partial x}\frac{\partial v_1}{\partial x}dxdy + \int_{\Omega} \frac{\partial u_1}{\partial y}\frac{\partial v_1}{\partial y}dxdy + \int_{\Omega} \frac{\partial u_2}{\partial x}\frac{\partial v_2}{\partial x}dxdy + \int_{\Omega} \frac{\partial u_2}{\partial y}\frac{\partial v_2}{\partial y }dxdy \)
\( b(p, {\bf v }) = \int_{\Omega}p \frac{\partial v_1}{\partial x} dxdy+ \int_{\Omega} p \frac{\partial v_2}{\partial y } dxdy \)
We denote by \( {\bf U} \times P \) the function space in which we look for the solution of the Stokes problem, \( ({\bf u},p) \in {\bf U } \times P \).
We denote by \( {\bf V } \times Q \) the space of functions used for testing in our variational (also called weak) formulation.
In this chapter we describe, in abstract mathematical language, how the residual minimization method is derived [1].
Our original problem can be written as follows: we are looking for a function \( ({\bf u},p) \in {\bf U } \times P \) such that \( B( ({\bf u},p),({\bf v},q) ) = L({\bf v }) \) for all test functions \( ({\bf v },q) \in V\times Q \).
We can denote the left-hand side by an operator \( B \) and the right-hand side by a functional \( l \) (mathematicians will say that \( B : ({\bf U}\times P) \rightarrow ({\bf V}\times Q)' \) and \( \langle B({\bf u},p), ({\bf v},q) \rangle =B(({\bf u},p),({\bf v },q)) \)).
Then our problem reduces to finding \( ({\bf u},p) \) such that
\( B({\bf u },p)-l=0 \). This expression is called the residual, \( r=B({\bf u },p)-l \).
If we only have an approximate solution, denoted for example \( ({\bf u_h },p_h) \),
then the residual expresses the error: the greater the residual, the greater the error of the approximate solution, \( r_h=B({\bf u_h },p_h)-l \). How do we measure the solution error expressed by the residual?
To measure how big the residual is, we need to introduce a norm (i.e., a measure of size):
\( \|r_h\| = \| B({\bf u_h },p_h)-l \| \). We can now say that we are looking for a solution \( ({\bf u_h },p_h) \) of our problem such that the residual (and thus the error) is as small as possible. Mathematically, this condition is written as the minimization of the norm
\( ({\bf u_h },p_h) = \displaystyle{argmin}_{({\bf w_h },s_h) \in U_h\times P_h } {1 \over 2} \| B({\bf w_h},s_h) - l \|_{({\bf V}\times Q)' }^2 \)
Note the factor \( 1\over 2 \) in front of the norm, and that the norm is squared.
The practical problem is that we do not know how to compute the norm of the difference \( \|B({\bf w_h },s_h)-l\| \). This norm measures distances in the abstract mathematical space \( ({\bf V}\times Q)' \) (mathematicians will say that this space is dual to the space of test functions).
To overcome this problem, we perform a mathematical trick: we transfer the norm from
\( ({\bf V}\times Q)' \) to the testing space \( {\bf V}\times Q \). We introduce an operator mapping the space \( {\bf V}\times Q \) into the space \( ({\bf V}\times Q)' \). This operator is called the Riesz operator, \( R_{(V,Q)} \colon ({\bf V}\times Q) \ni ({\bf v},q) \rightarrow ( ({\bf v},q),\cdot) \in ({\bf V}\times Q)' \). The inverse operator, on the other hand, maps the space \( ({\bf V}\times Q)' \) back into the space \( {\bf V}\times Q \), so \( R_{(V,Q)}^{-1} \colon ({\bf V}\times Q)' \ni (({\bf v},q),\cdot) \rightarrow ({\bf v},q) \in {\bf V}\times Q \).
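To see what the inverse Riesz operator means in practice: once the test space is discrete, inverting the Riesz map amounts to solving a linear system with the Gram matrix of the inner product. A minimal sketch, assuming numpy is available; the names `G` and `f_vec` are placeholders for the Gram matrix and the vector of values of the functional on the test basis:

```python
import numpy as np

def riesz_representative(G, f_vec):
    # G: Gram matrix of the inner product on the discrete test space
    # f_vec: values of the functional on the test basis functions
    # The representative r satisfies (r, v) = f(v) for every test function v,
    # which written in the basis is the linear system G r = f_vec.
    return np.linalg.solve(G, f_vec)
```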
So, if we use the inverse Riesz operator, we move our problem to the space of test functions \( {\bf V}\times Q \). We have
\( ({\bf u_h},p_h) = \displaystyle{argmin}_{({\bf w_h },s_h) \in (U_h\times P_h)} {1 \over 2} \| {R_{(V,Q)}^{-1}} (B({\bf w_h},s_h) - l) \|_{V\times Q }^2 \). So we are looking for the point at which this functional attains its minimum. The minimum is attained where the derivatives are equal to zero. Since we have a functional here, mathematicians will say that we are looking for the \( ({\bf u_h },p_h) \) at which the Gâteaux derivatives are zero in all directions
\( \langle R_{(V,Q) }^{-1} (B({\bf u_h },p_h) - l), R_{(V,Q)}^{-1 }(B({\bf w_h },s_h) ) \rangle_{V\times Q} = 0 \quad \forall ({\bf w_h },s_h) \in U_h \times P_h \), where \( ({\bf w_h},s_h) \) stands for all possible "directions" from the finite dimensional function space \( U_h \times P_h \) (in which we are also looking for the solution of our Stokes problem).
Technically speaking, at this point we have to construct a finite dimensional space of functions
\( U_h\times P_h \) in which we will look for our solution. In our textbook we do this, of course, using B-spline basis functions, spanning them along knot vectors in the x- and y-directions.
Along the axis \( x \) we introduce the knot vector [0 0 0 1 2 3 4 4 4]; likewise, along the axis \( y \) we introduce the knot vector [0 0 0 1 2 3 4 4 4].
We obtain two bases of one-dimensional B-spline basis functions, \( \{ B_{i,2}(x) \}_{i=1,...,6 } \) and \( \{B_{j,2}(y)\}_{j=1,...,6 } \). Then we create tensor products from them: \( B_{i,j;2}(x,y)=B_{i,2}(x)B_{j,2}(y),i,j=1,...,6 \).
Note that our domain \( \Omega \) is the square \( [0,1]^2 \), and that we have 6 * 6 = 36 basis functions, which follows from the definition of the knot vectors [0 0 0 1 2 3 4 4 4].
Our approximation space is then spanned by the tensor products of our one-dimensional B-spline functions. Since we now have two velocity components and one scalar pressure value, we get \( U_h \times P_h = \textrm{span} \{ (B_{i,j;2}(x,y),0,0),\; (0,B_{i,j;2}(x,y),0),\; (0,0,B_{i,j;2}(x,y)) : i,j=1,...,6 \} \), where \( B_{i,j;2}(x,y)=B_{i,2}(x)B_{j,2}(y) \).
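The following sketch, assuming scipy is available, builds this one-dimensional quadratic basis from the knot vector and evaluates one tensor product; the number of basis functions equals the number of knots minus the degree minus one:

```python
import numpy as np
from scipy.interpolate import BSpline

knots = np.array([0, 0, 0, 1, 2, 3, 4, 4, 4], dtype=float)
p = 2                                  # quadratic B-splines
n = len(knots) - p - 1                 # 9 - 3 = 6 basis functions per direction

def basis(i):
    # i-th basis function B_{i,2}: coefficient 1 at position i, 0 elsewhere
    return BSpline(knots, np.eye(n)[i], p)

# tensor product B_{i,j;2}(x,y) = B_{i,2}(x) B_{j,2}(y)
print(n)                               # 6, hence 6 * 6 = 36 tensor products
print(basis(2)(1.5) * basis(3)(2.5))   # value of B_{3,4;2} at a sample point
```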
We denote
\( r=R_{(V,Q)}^{-1}(B({\bf u_h },p_h)-l) \)
and then we can rewrite
\( \langle r , R_{(V,Q)}^{-1} (B\, ({\bf w_h },s_h) ) \rangle = 0 \quad \forall \, (w_h,s_h ) \in U_h\times P_h \)
which is equivalent to \( \langle B({\bf w_h },s_h), r \rangle = 0 \quad \forall ({\bf w_h},s_h ) \in U_h \times P_h. \)
We now add a second equation using the definition of the residual, \( r=B({\bf u_h },p_h)-l \): we test the residual definition against the test functions from the space of test functions and we get the equation
\( (r,({\bf v},q))_{V\times Q}=\langle B({\bf u_h },p_h)-l,({\bf v},q) \rangle \quad \forall( {\bf v},q )\in {\bf V}\times Q. \)
So we have a system of equations: find the residual in the infinite dimensional space of test functions, \( r\in {\bf V}\times Q \), and the solution in the finite dimensional function space, \( ({\bf u_h },p_h) \in U_h\times P_h \):
\( (r,({\bf v},q))_{V\times Q} - \langle B({\bf u_h },p_h)-l ,({\bf v},q) \rangle = 0 \quad \forall ({\bf v},q) \in {\bf V}\times Q \\ \langle B({\bf w_h },s_h),r\rangle = 0 \quad \forall ({\bf w_h},s_h) \in U_h\times P_h \)
If we could solve this system, we would get the best possible approximation of our problem in the space of approximating functions \( U_h\times P_h \). Unfortunately, this is not possible, because the infinite dimensional space of test functions generates an infinite number of equations to solve. We therefore have to choose a second, different basis to approximate the test space. Here comes the first significant difference between the residual minimization method and the traditional finite element method: in the residual minimization method we have two different spaces, one for the approximation of the solution and the other for testing. Of course, the testing space is also spanned by B-spline functions. For example, we can use the same patch of elements for the testing space as for the approximating space (but this is not required!), while increasing the degree of the B-spline functions.
We define what our patch of elements will look like and spread our B-spline basis functions over the elements, giving a knot vector along the axis \( x \) and a knot vector along the axis \( y \). We refer here to the third chapter, which describes the way of defining basis functions using knot vectors and B-spline functions.
Along the axis \( x \) we introduce the knot vector [0 0 0 1 2 3 4 4 4]; likewise, along the axis \( y \) we introduce the knot vector [0 0 0 1 2 3 4 4 4]. So we use quadratic C1-continuous B-splines for approximation.
For testing we increase the degree and keep C1 continuity. Along the axis \( x \) we introduce the knot vector [0 0 0 0 1 1 2 2 3 3 4 4 4 4]; likewise, along the axis \( y \) we introduce the knot vector [0 0 0 0 1 1 2 2 3 3 4 4 4 4].
Generally, the testing space must be larger than the approximation space. We can achieve this by increasing the number of elements, increasing the degree of the B-spline functions, reducing the continuity of the B-spline basis, or by mixing these three approaches. The approximation space should be a subspace of the testing space.
We obtain two bases of one-dimensional B-spline functions used for testing, \( \{ \tilde{B} _{k,3}(x) \}_{k=1,...,10 } \) and \( \{\tilde{B}_{l,3}(y) \}_{l=1,...,10 } \). Then we create the tensor products \( \tilde{B}_{k,l;3}(x,y)={\tilde{B}}_{k,3}(x){\tilde{B}}_{l,3}(y),k,l=1,...,10 \).
Our testing space, taking into account two test function components for the velocity field and one test function component for the pressure, is \( V_m\times Q_m = \textrm{span} \{ (\tilde{B}_{k,l;3}(x,y),0,0),\; (0,\tilde{B}_{k,l;3}(x,y),0),\; (0,0,\tilde{B}_{k,l;3}(x,y)) : k,l=1,...,10 \} \), where \( \tilde{B}_{k,l;3}(x,y)=\tilde{B}_{k,3}(x)\tilde{B}_{l,3}(y) \).
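The size and continuity of the test basis follow directly from the knot vector: with degree p = 3 and 14 knots we get 14 - 3 - 1 = 10 functions per direction, and the doubled interior knots give C^(3-2) = C1 continuity there. A quick check in plain Python, with no external libraries:

```python
knots_test = [0,0,0,0, 1,1, 2,2, 3,3, 4,4,4,4]
p_test = 3
n_test = len(knots_test) - p_test - 1   # number of cubic test functions
print(n_test)                           # 10 per direction, 100 tensor products
```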
Having a discrete testing space, we finally obtain our discrete (so finite dimensional) system of equations that we want to solve.
We are looking for \( r_m \in V_m \times Q_m \) and \( ({\bf u_h },p_h) \in U_h \times P_h \) such that
\( (r_m,({\bf v_m }, q_m))_{V_m \times Q_m } - \langle B ({\bf u_h },p_h)-l,({\bf v_m }, q_m) \rangle = 0 \quad \forall ({\bf v_m },q_m) \in V_m \times Q_m \\ \langle B({\bf w_h }, s_h),r_m\rangle = 0 \quad \forall ({\bf w_h },s_h) \in U_h\times P_h \)
This is where the so-called inner product in the discrete testing space appears, \( (\cdot,\cdot)_{V_m \times Q_m } \).
To guess what inner product to use in our problem, we need to look at our original equation in the weak form \( B( ({\bf u},p),({\bf v}, q)) = \left( \begin{matrix} a({\bf u}, {\bf v}) + b(p, {\bf v}) \\ b(q, {\bf u } ) \\ \end{matrix} \right) \)
and see in what form the test functions \( ({\bf v },q) \) enter its components
\( a({\bf u}, {\bf v }) =\int_{\Omega} \frac{\partial u_1}{\partial x}\frac{\partial v_1}{\partial x}dxdy + \int_{\Omega} \frac{\partial u_1}{\partial y}\frac{\partial v_1}{\partial y}dxdy + \int_{\Omega} \frac{\partial u_2}{\partial x}\frac{\partial v_2}{\partial x}dxdy + \int_{\Omega} \frac{\partial u_2}{\partial y}\frac{\partial v_2}{\partial y }dxdy \)
\( b(p, {\bf v }) = \int_{\Omega}p \frac{\partial v_1}{\partial x} dxdy+ \int_{\Omega} p \frac{\partial v_2}{\partial y } dxdy \)
Our inner product should therefore contain the derivatives of the functions \( v_1,v_2 \) and the values of the function \( q \):
\( ( ({\bf u },p), ({\bf v },q))_{V_m\times Q_m} =\\= \int_{\Omega} p(x,y) q(x,y) dxdy + \int_{\Omega} \frac{\partial u_1(x,y)}{\partial x } \frac{\partial v_1(x,y) }{\partial x }dxdy + \int_{\Omega} \frac{\partial u_1(x,y)}{\partial y } \frac{\partial v_1(x,y)}{\partial y }dxdy + \int_{\Omega} \frac{\partial u_2(x,y)}{\partial x } \frac{\partial v_2(x,y)}{\partial x }dxdy + \int_{\Omega} \frac{\partial u_2(x,y)}{\partial y } \frac{\partial v_2(x,y)}{\partial y }dxdy \)
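Since this inner product separates into one-dimensional integrals, its Gram blocks (the L2 term for the pressure and the derivative terms for the velocities) can be assembled from one-dimensional mass and stiffness matrices via Kronecker products. A sketch under these assumptions, with scipy available and Gauss quadrature with p+1 points per knot span, which is exact for products of degree-3 splines:

```python
import numpy as np
from scipy.interpolate import BSpline

def mass_and_stiffness(knots, p):
    # 1D matrices M[i,k] = \int B_i B_k dx and K[i,k] = \int B'_i B'_k dx
    knots = np.asarray(knots, dtype=float)
    n = len(knots) - p - 1
    funcs = [BSpline(knots, np.eye(n)[i], p) for i in range(n)]
    ders = [f.derivative() for f in funcs]
    M = np.zeros((n, n)); K = np.zeros((n, n))
    gx, gw = np.polynomial.legendre.leggauss(p + 1)
    for a, b in zip(knots[:-1], knots[1:]):
        if b <= a:                          # skip repeated knots
            continue
        xq = 0.5 * (b - a) * gx + 0.5 * (a + b)   # map points to the span
        wq = 0.5 * (b - a) * gw
        fx = np.array([f(xq) for f in funcs])     # values at quadrature points
        dx = np.array([d(xq) for d in ders])      # derivative values
        M += (fx * wq) @ fx.T
        K += (dx * wq) @ dx.T
    return M, K

M, K = mass_and_stiffness([0,0,0,0,1,1,2,2,3,3,4,4,4,4], 3)
G1 = np.kron(M, M)    # L2 product of the pressure test functions
G2 = np.kron(K, M)    # x-derivative term of the velocity test functions
G3 = np.kron(M, K)    # y-derivative term of the velocity test functions
```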
In conclusion, in the residual minimization method, we need to define a separate approximation space, and a separate (larger) testing space, and choose the inner product of the testing space. We then get a system of equations in which the unknown is our solution \( ({\bf u },p) \) and additionally, a residuum \( r \).
How well residual minimization works largely depends on our choice of the test space and the inner product. If the inner product is properly chosen and the test space is large enough, everything will work perfectly well, and we will get the best possible solution in the approximation space \( U_h \times P_h \) (but not better than the approximation space allows). The mathematical justification is that the residual minimization problem with an infinite-dimensional test space realizes the inf-sup stability condition with a constant equal to 1. If the testing space is narrowed down to a finite-dimensional space, the inf-sup condition may no longer be perfectly realized in this space. Enlarging the testing space improves the fulfillment of the inf-sup stability condition.
Our system of equations to be solved by the residual minimization method is as follows
\( \left( \begin{matrix} G & B \\ B^T & 0 \\ \end{matrix} \right) \left( \begin{matrix} r \\ u \end{matrix} \right)= \left( \begin{matrix} f \\ 0 \end{matrix} \right) \)
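In a computer implementation this saddle-point system can be assembled and solved block-wise. A minimal sketch, assuming numpy; the names `G`, `Bmat`, and `f` are placeholders for the Gram matrix, the rectangular problem matrix, and the right-hand side vector:

```python
import numpy as np

def solve_residual_minimization(G, Bmat, f):
    # G: m x m Gram matrix of the test inner product
    # Bmat: m x n problem matrix (rows: test space, columns: trial space)
    # f: right-hand side vector of length m
    m, n = Bmat.shape
    lhs = np.block([[G, Bmat],
                    [Bmat.T, np.zeros((n, n))]])
    rhs = np.concatenate([f, np.zeros(n)])
    sol = np.linalg.solve(lhs, rhs)
    return sol[:m], sol[m:]   # residual representative r, solution coefficients
```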

Let's see what our system of equations looks like for concrete approximation and testing spaces.

We now define the functions used to approximate the solution and for testing.
We define what our patch of elements will look like and spread our B-spline basis functions over the elements, giving a knot vector along the axis \( x \) and a knot vector along the axis \( y \). We refer here to the third chapter, which describes the way of defining basis functions using knot vectors and B-spline functions.
Along the axis \( x \) we introduce a knot vector [0 0 0 1 2 3 4 4 4], likewise along the axis \( y \) we introduce a knot vector [0 0 0 1 2 3 4 4 4].
We obtain two bases of one-dimensional B-spline functions, \( \{ B_{i,2}(x) \}_{i=1,...,6 } \) and \( \{B_{j,2}(y)\}_{j=1,...,6 } \). Then we create tensor products from them: \( B_{i,j;2}(x,y)=B_{i,2}(x)B_{j,2}(y),i,j=1,...,6 \).
We assume that our domain \( \Omega \) is the square \( [0,1]^2 \); as before, we have 6 * 6 = 36 basis functions, which follows from the definition of the knot vector [0 0 0 1 2 3 4 4 4].
We now take our approximation of the velocity fields
\( u_1(x,y) \approx \sum_{i,j=1,...,6} u_1^{i,j } B_{i,2}(x)B_{j,2}(y) \)
\( u_2(x,y) \approx \sum_{i,j=1,...,6} u_2^{i,j } B_{i,2}(x)B_{j,2}(y) \)
and pressure
\( p(x,y) \approx \sum_{i,j=1,...,6} p^{i,j } B_{i,2}(x)B_{j,2}(y) \)
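Given the coefficient arrays, evaluating such an expansion at a point reduces to two one-dimensional basis evaluations and a double sum. A sketch, assuming scipy; the all-ones coefficients are placeholders, chosen so that the partition-of-unity property of B-splines makes the result equal to 1:

```python
import numpy as np
from scipy.interpolate import BSpline

knots = np.array([0, 0, 0, 1, 2, 3, 4, 4, 4], dtype=float)
p, n = 2, 6
B = [BSpline(knots, np.eye(n)[i], p) for i in range(n)]

def evaluate(coeffs, x, y):
    # sum_{i,j} coeffs[i,j] B_{i,2}(x) B_{j,2}(y)
    Bx = np.array([B[i](x) for i in range(n)])
    By = np.array([B[j](y) for j in range(n)])
    return Bx @ coeffs @ By

u1 = np.ones((n, n))             # placeholder coefficients u1^{i,j}
print(evaluate(u1, 1.5, 2.5))    # B-splines sum to 1, so this prints 1.0
```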
In the residual minimization method, a different, larger space is used for testing than for approximation. So let us span the testing functions with the knot vector [0 0 0 0 1 1 2 2 3 3 4 4 4 4] along the x-axis; likewise, along the y-axis we introduce the knot vector [0 0 0 0 1 1 2 2 3 3 4 4 4 4]. In other words, for approximation we use quadratic B-spline functions of C1 continuity; for testing we use cubic B-spline functions, also of C1 continuity. Mathematicians say that it is good for the approximation space to be a subspace of the testing space.
Now we obtain two bases of one-dimensional B-spline functions,
\( \{ \tilde{B}_{i,3}(x) \}_{i=1,...,10 } \) and \( \{\tilde{B}_{j,3}(y)\}_{j=1,...,10 } \). We create the tensor products \( \tilde{B}_{i,j;3}(x,y)=\tilde{B}_{i,3}(x)\tilde{B}_{j,3}(y),i,j=1,...,10 \).
So for testing we use linear combinations of the B-spline functions from the test space: \( v_1(x,y) \approx \sum_{k,l=1,...,10} v_1^{k,l } \tilde{B}_{k,3}(x)\tilde{B}_{l,3}(y) \)
\( v_2(x,y) \approx \sum_{k,l=1,...,10} v_2^{k,l } \tilde{B}_{k,3}(x)\tilde{B}_{l,3}(y) \)
and the pressure test function
\( q(x,y) \approx \sum_{k,l=1,...,10} q^{k,l } \tilde{B}_{k,3}(x)\tilde{B}_{l,3}(y) \)
Now note that the inner product matrix \( G \) is spanned over the testing space. Also note that our problem matrix \( B \) has rows spanned by the testing space and columns spanned by the approximation space.
Let us first write the matrix of our problem \( B \).
We first take the test functions \( ({\bf v },q) = (v_1,0,0)=(\tilde{B}_{k,3}(x)\tilde{B}_{l,3}(y),0,0) \textrm{ for } k,l=1,...,10 \) and substitute them into the weak formulation. We get the matrices
\( A1_{i,j=1,...,6;k,l=1,...,10} = \int_{\Omega} \frac{\partial B_{i,2}(x)}{\partial x}B_{j,2}(y)\frac{\partial \tilde{B}_{k,3}(x) }{\partial x}\tilde{B}_{l,3 }(y)dxdy + \int_{\Omega} B_{i,2}(x)\frac{\partial B_{j,2}(y)}{\partial y}\tilde{B}_{k,3}(x)\frac{\partial \tilde{B}_{l,3}(y)}{\partial y }dxdy \)
\( B1_{i,j=1,...,6;k,l=1,...,10} = \int_{\Omega} B_{i,2}(x) B_{j,2}(y) \frac{\partial \tilde{B}_{k,3}(x)}{\partial x}\tilde{B}_{l,3}(y) dxdy \)
Then we take the basis functions \( ({\bf v },q) = (0,v_2,0)=(0,\tilde{B}_{k,3}(x)\tilde{B}_{l,3}(y),0) \textrm{ for } k,l=1,...,10 \) and substitute them into the weak formulation. We get the matrices
\( A2_{i,j=1,...,6;k,l=1,...,10}=\int_{\Omega} \frac{\partial B_{i,2}(x)}{\partial x} B_{j,2}(y)\frac{\partial \tilde{B}_{k,3}(x)}{\partial x}\tilde{B}_{l,3}(y)dxdy + \int_{\Omega} B_{i,2}(x)\frac{\partial B_{j,2}(y)}{\partial y }\tilde{B}_{k,3}(x)\frac{\partial \tilde{B}_{l,3}(y)}{\partial y }dxdy \)
\( B2_{i,j=1,...,6;k,l=1,...,10} =\int_{\Omega} B_{i,2}(x)B_{j,2}(y) \tilde{B}_{k,3}(x)\frac{\partial \tilde{B}_{l,3}(y)}{\partial y } dxdy \)
Finally, taking the basis functions \( ({\bf v },q) = (0,0,q)=(0,0,\tilde{B}_{k,3}(x)\tilde{B}_{l,3}(y)) \textrm{ for } k,l=1,...,10 \) and substituting them into the weak formulation, we get the matrices
\( Q1_{i,j=1,...,6;k,l=1,...,10} =\int_{\Omega } \frac{\partial B_{i,2}(x)}{\partial x} B_{j,2}(y) \tilde{B}_{k,3}(x) \tilde{B}_{l,3}(y) dxdy \)
\( Q2_{i,j=1,...,6;k,l=1,...,10} =\int_{\Omega} B_{i,2}(x)\frac{\partial B_{j,2}(y)}{\partial y}\tilde{B}_{k,3 }(x)\tilde{B}_{l,3 }(y) dxdy \)
and right-hand side \( F1_{k,l=1,...,10}= \int_{\Omega} \frac{\partial h(x,y)}{\partial x}\frac{\partial \tilde{B}_{k,3}(x)}{\partial x } \tilde{B}_{l,3}(y)dxdy +\int_{\partial \Omega} \frac{\partial h(x,y)}{\partial n } \tilde{B}_{k,3}(x)\tilde{B}_{l,3}(y)dS \)
\( F2_{k,l=1,...,10}= \int_{\Omega} \frac{\partial h(x,y)}{\partial y}\tilde{B}_{k,3}(x)\frac{\partial \tilde{B}_{l,3}(y)}{\partial y} dxdy +\int_{\partial \Omega} \frac{\partial h(x,y)}{\partial n } \tilde{B}_{k,3}(x)\tilde{B}_{l,3}(y)dS \)
Our matrix looks like this \( \left( \begin{matrix} A1 & 0 & B1 \\ 0 & A2 & B2 \\ Q1 & Q2 & 0 \end{matrix} \right) \)
Note that now it IS NOT a square matrix (we have a different number of rows and columns).
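Because every entry of these blocks is a product of a one-dimensional integral in x and a one-dimensional integral in y, the blocks themselves are Kronecker products of small one-dimensional matrices. A sketch under these assumptions, with scipy available; `one_d_matrix` is a hypothetical helper that integrates products of test and trial B-splines, possibly differentiated, by Gauss quadrature, and since the knot vectors in x and y coincide, the same one-dimensional matrices serve both directions:

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_basis(knots, p, derivative=False):
    knots = np.asarray(knots, dtype=float)
    n = len(knots) - p - 1
    funcs = [BSpline(knots, np.eye(n)[i], p) for i in range(n)]
    return [f.derivative() for f in funcs] if derivative else funcs

def one_d_matrix(test, trial, breaks, npts=5):
    # entries A[k, i] = \int test_k(x) trial_i(x) dx, by Gauss quadrature
    gx, gw = np.polynomial.legendre.leggauss(npts)
    A = np.zeros((len(test), len(trial)))
    for a, b in zip(breaks[:-1], breaks[1:]):
        xq = 0.5 * (b - a) * gx + 0.5 * (a + b)
        wq = 0.5 * (b - a) * gw
        tv = np.array([t(xq) for t in test])
        uv = np.array([u(xq) for u in trial])
        A += (tv * wq) @ uv.T
    return A

trial_knots = [0,0,0,1,2,3,4,4,4];           p_trial = 2
test_knots = [0,0,0,0,1,1,2,2,3,3,4,4,4,4];  p_test = 3
breaks = [0, 1, 2, 3, 4]                     # element boundaries

T = bspline_basis(test_knots, p_test)
dT = bspline_basis(test_knots, p_test, derivative=True)
U = bspline_basis(trial_knots, p_trial)
dU = bspline_basis(trial_knots, p_trial, derivative=True)

Mx = one_d_matrix(T, U, breaks)     # \int tB_k B_i dx
Kx = one_d_matrix(dT, dU, breaks)   # \int tB'_k B'_i dx
Dx = one_d_matrix(dT, U, breaks)    # \int tB'_k B_i dx (derivative on test)
Dxt = one_d_matrix(T, dU, breaks)   # \int tB_k B'_i dx (derivative on trial)

A1 = np.kron(Kx, Mx) + np.kron(Mx, Kx)   # equals A2 here (same knots in x and y)
B1 = np.kron(Dx, Mx)                     # x-derivative on the test function
B2 = np.kron(Mx, Dx)                     # y-derivative on the test function
Q1 = np.kron(Dxt, Mx)                    # x-derivative on the trial function
Q2 = np.kron(Mx, Dxt)                    # y-derivative on the trial function

Z = np.zeros_like(A1)                    # each block is 100 x 36
Bfull = np.block([[A1, Z, B1],
                  [Z, A1, B2],
                  [Q1, Q2, Z]])          # 300 x 108: rectangular, as noted above
```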
Let us now write the inner product matrix \( G \).
\( G1_{i,j=1,...,10;k,l=1,...,10} =\int_{\Omega }\tilde{B}_{i,3}(x)\tilde{B}_{j,3}(y) \tilde{B}_{k,3}(x) \tilde{B}_{l,3 }(y) dxdy \)
\( G2_{i,j=1,...,10;k,l=1,...,10} =\int_{\Omega }\frac{\partial \tilde{B}_{i,3}(x)}{\partial x} \tilde{B}_{j,3}(y) \frac{\partial \tilde{B}_{k,3}(x)}{\partial x} \tilde{B}_{l,3 }(y) dxdy \)
\( G3_{i,j=1,...,10;k,l=1,...,10} =\int_{\Omega }\tilde{B}_{i,3}(x)\frac{\partial \tilde{B}_{j,3}(y)}{\partial y} \tilde{B}_{k,3}(x) \frac{\partial \tilde{B}_{l,3 }(y)}{\partial y } dxdy \)
Our matrix looks like this \( \left( \begin{matrix} G2+G3 & 0 & 0 \\ 0 & G2+G3 & 0 \\ 0 & 0 & G1 \end{matrix} \right) \), with the blocks ordered consistently with the rows \( (v_1,v_2,q) \) of the problem matrix \( B \).

Note that now it IS a square matrix with the number of rows and columns corresponding to the size of the testing space.
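Assembled in code, this is a block-diagonal matrix with the velocity blocks first, matching the (v1, v2, q) ordering of the rows of B. A one-line sketch, assuming scipy and the blocks G1, G2, G3 computed as in the earlier Gram-matrix sketch:

```python
from scipy.linalg import block_diag

# block-diagonal Gram matrix in the (v1, v2, q) ordering of the test space
G = block_diag(G2 + G3, G2 + G3, G1)
```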
